"Debian 30 years of collective intelligence" -Maqsuel Maqson Brazil
The cake is there. :)
Honorary Debian Developers: Buzz, Jessie, and Woody welcome guests to this amazing party. São Carlos, state of São Paulo, Brazil
Stickers, and Fliers, and Laptops, oh my! Belo Horizonte, Brazil
Brasília, Brazil
Brasília, Brazil
Mexico: 30 years!
A quick Selfie
We do not encourage beverages on computing hardware, but this one is okay by us. Germany
The German Delegation is also looking for this dog who footed the bill for the party, then left mysteriously.
We brought the party back inside at CCCamp. Belgium
Cake and Diversity in Belgium. El Salvador
Food and Fellowship in El Salvador. South Africa
Debian is also very delicious!
All smiles waiting to eat the cake

Reports

- Debian Day 30 years in Maceió - Brazil
- Debian Day 30 years in São Carlos - Brazil
- Debian Day 30 years in Pouso Alegre - Brazil
- Debian Day 30 years in Belo Horizonte - Brazil
- Debian Day 30 years in Curitiba - Brazil
- Debian Day 30 years in Brasília - Brazil
- Debian Day 30 years online in Brazil

Articles & Blogs

- Happy Debian Day - going 30 years strong - Liam Dawe
- Debian Turns 30 Years Old, Happy Birthday! - Marius Nestor
- 30 Years of Stability, Security, and Freedom: Celebrating Debian's Birthday - Bobby Borisov
- Happy 30th Birthday, Debian! - Claudio Kuenzler
- Debian is 30 and Sgt Pepper Is at Least Ninetysomething - Christine Hall
- Debian turns 30! - Corbet
- Thirty years of Debian! - Lennart Hengstmengel
- Debian marks three decades as 'Universal Operating System' - Sam Varghese
- Debian Linux Celebrates 30 Years Milestone - Joshua James
- 30 years on, Debian is at the heart of the world's most successful Linux distros - Liam Proven
- Looking Back on 30 Years of Debian - Maya Posch
- Cheers to 30 Years of Debian: A Journey of Open Source Excellence - arindam

Discussions and Social Media

- Debian Celebrates 30 Years - Source: News YCombinator
- Brand-new Linux release, which I'm calling the Debian ... - Source: News YCombinator
- Comment: Congrats @debian !!! Happy Birthday! Thank you for becoming a cornerstone of the #opensource world. Here's to decades of collaboration, stability & #software #freedom - openSUSELinux via X (formerly Twitter)
- Comment: Today we #celebrate the 30th birthday of #Debian, one of the largest and most important cornerstones of the #opensourcecommunity. For this we would like to thank you very much and wish you the best for the next 30 years! - TUXEDOComputers via X (formerly Twitter)
- Happy Debian Day! - Source: Reddit.com
- Debian Celebrates 30 years - Source: Lobste.rs
- Debian Celebrates 30 years! - Source: Debian User Forums
- Debian Celebrates 30 years! - Source: Linux.org

Video

- The History of Debian: The Beginning - Source: Linux User Space
- Debian At 30 and No More Distro Hopping! - LWDW388 - Source: LinuxGameCast
update-alternatives --config x-www-browser
was pointing at Firefox already, of course.
xdg-mime query default text/html
delivered firefox-esr.desktop, of course.
Still nearly every link opens in Chromium...
As usual, the answer is out there, in this case in an xfce4-terminal bug report from 2015.
The friendly "runkharr" has debugged the issue and provides the fix as well.
As usual, all very easy once you know where to look. And a reason to hate GTK a bit more again:
The GTK function gtk_show_uri() uses GLib's g_app_info_launch_default_for_uri(), and that - of course - cannot respect the usual mimetype settings.
So quoting "runkharr" verbatim:
1. Create a file exo-launch.desktop in your ~/.local/share/applications directory with something like the following content:

[Desktop Entry]
Name=Exo Launcher
Type=Application
Icon=gtk-open
Categories=Desktop;
Comment=A try to force 'xfce4-terminal' to use the preferred application(s)
GenericName=Exo Launcher
Exec=exo-open %u
MimeType=text/html;application/xhtml+xml;x-scheme-handler/http;x-scheme-handler/https;x-scheme-handler/ftp;application/x-mimearchive;
Terminal=false
OnlyShowIn=XFCE;

2. Create (if not already existing) a local defaults.list file, again in your ~/.local/share/applications directory. This file must start with a "group header" of:

[Default Applications]

3. Insert the following three lines somewhere below this [Default Applications] group header [..]:

x-scheme-handler/http=exo-launch.desktop;
x-scheme-handler/https=exo-launch.desktop;
x-scheme-handler/ftp=exo-launch.desktop;

And ... links open in Firefox again. Thank you "runkharr"!
The dmesg log showed entries that looked suspicious:
Googling these error -110 and error -71 messages is a bit hard. Now why the USB driver does not give useful error messages instead of archaic errno-style numbers escapes me. This is not the 80s anymore.
The wisdom of the crowd says error -110 is something around "the USB port power supply was exceeded" [source].
Now lsusb -tv shows device 1-7 ... to be my USB keyboard. I somehow doubt that it wants more power than the hub is willing to provide.
The Archlinux BBS Forums recommend piecing together information from drivers/usb/host/ohci.h and (updated from their piece, which is from 2012) /tools/include/uapi/asm-generic/errno.h. This is why some people then consider -110 to mean "Connection timed out". Nah, not likely either.
Reading through the kernel source around drivers/usb/host did not enlighten me either. To the contrary. Uuugly. There seems to be no comprehensive list of what these error codes mean. And the numbers are assigned to error conditions quite arbitrarily. And - of course - there is no documentation. "It was hard to do, so it should be hard to understand as well."
Luckily some of the random musings I read through contained a curious piece of advice: power cycle the host. So I did, and that did not make the error go away. Other people insisted on pulling cables out of wall sockets, unplugging everything and conducting esoteric rituals. That made it dawn on me: the mainboard of course nicely powers the USB ports in the "off" state, too. So switching the power supply off (yes, these have a separate switch, go find yours), waiting a bit for the capacitors to drain, and switching things back on and ... the errors were gone, and the system booted within seconds again.
So the takeaway message: If you get random error messages like
Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in version 0.1.10 (2023-05-14)
- simdjson was upgraded to version 3.1.8 (Dirk in #85).
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
kubectl may have a completely different impact on the API depending on usage, for example when listing the whole collection of objects (very expensive) vs. a single object.
The conclusion was to try to avoid hitting the api-server with LIST calls, and to use ResourceVersion, which avoids full dumps from etcd (these full dumps, by the way, are the default when using bare kubectl get calls). I already knew some of this, and for example the jobs-framework-emailer was already making use of this ResourceVersion functionality.
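As a rough illustration of why this matters (a sketch only: the apiserver endpoint and namespace are made up, and no request is actually sent), a LIST with resourceVersion="0" may be answered from the apiserver's watch cache, while a bare LIST forces a full quorum read from etcd:

```python
from typing import Optional
from urllib.parse import urlencode

API_SERVER = "https://k8s.example.org:6443"  # hypothetical apiserver endpoint

def list_pods_url(namespace: str, resource_version: Optional[str] = None) -> str:
    """Build a LIST URL for the pods in a namespace.

    Passing resourceVersion="0" allows the apiserver to answer from its
    watch cache; omitting the parameter means a full dump from etcd.
    """
    path = f"/api/v1/namespaces/{namespace}/pods"
    if resource_version is None:
        return API_SERVER + path
    return API_SERVER + path + "?" + urlencode({"resourceVersion": resource_version})

expensive = list_pods_url("toolforge")                    # bare LIST, hits etcd
cheap = list_pods_url("toolforge", resource_version="0")  # may be cache-served
```

The same semantics apply when using client libraries or kubectl with --chunk-size / raw API calls; the sketch only shows where the parameter goes.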
There have been a lot of improvements on the performance side of Kubernetes in recent times, or more
specifically, in how resources are managed and used by the system. I saw a review of resource management from
the perspective of the container runtime and kubelet, and plans to support fancy things like topology-aware
scheduling decisions and dynamic resource claims (changing the pod resource claims without
re-defining/re-starting the pods).
On cluster management, bootstrapping and multi-tenancy
I attended a couple of talks that mentioned kubeadm, and one in particular was from the maintainers
themselves. This was of interest to me because as of today we use it for
Toolforge. They shared all
the latest developments and improvements, and the plans and roadmap for the future, with a special mention to
something they called the "kubeadm operator", apparently capable of auto-upgrading the cluster, auto-renewing
certificates and such.
I also saw a comparison between the different cluster bootstrappers, which to me confirmed that kubeadm was
the best, from the point of view of being a well established and well-known workflow, plus having a very
active contributor base. The kubeadm developers invited the audience to submit feature requests,
so I did.
The different talks confirmed that the basic unit for multi-tenancy in kubernetes is the namespace. Any
serious multi-tenant usage should leverage this. There were some ongoing conversations, in official sessions
and in the hallway, about the right tool to implement K8s-within-K8s, and vcluster
was mentioned enough times for me to be convinced it was the right candidate. This was despite my impression
that multiclusters / multicloud are regarded as hard topics in the general community. I definitely would like to play
with it sometime down the road.
On networking
I attended a couple of basic sessions that served really well to understand how Kubernetes instruments the
network to achieve its goals. The conference program had sessions to cover topics ranging from network
debugging recommendations, CNI implementations, to IPv6 support. Also, one of the keynote sessions had a
reference to how kube-proxy is not able to perform NAT for SIP connections, which is interesting because I
believe Netfilter Conntrack could do it if properly configured. One of the conclusions on the CNI front was
that Calico has a massive community adoption (in Netfilter mode), which is reassuring, especially considering
it is the one we use for Toolforge Kubernetes.
On jobs
I attended a couple of talks that were related to HPC/grid-like usages of Kubernetes. I was truly impressed
by some folks out there who were using Kubernetes Jobs on massive scales, such as to train machine learning
models and other fancy AI projects.
It is acknowledged in the community that the early implementation of things like Jobs and CronJobs had some
limitations that are now gone, or at least greatly improved. Some new functionalities have been added as
well. Indexed Jobs, for example, give each Pod in a Job a number (an index) so it can process a chunk of a larger
batch of data based on that index. This would allow for full grid-like features like sequential (or again,
indexed) processing, coordination between Jobs, and more graceful Job restarts. My first reaction was: Is that
something we would like to enable in Toolforge Jobs Framework?
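To make the Indexed Jobs idea concrete, here is a minimal sketch (my own illustration, not Toolforge code) of how a worker Pod could pick its chunk of the batch. Kubernetes exposes the Pod's completion index through the JOB_COMPLETION_INDEX environment variable:

```python
import os

def my_chunk(items, total_pods):
    """Return the slice of `items` this Pod is responsible for.

    Kubernetes sets JOB_COMPLETION_INDEX on each Pod of an Indexed Job;
    we default to 0 so the sketch also runs outside a cluster.
    """
    index = int(os.environ.get("JOB_COMPLETION_INDEX", "0"))
    chunk_size = (len(items) + total_pods - 1) // total_pods  # ceiling division
    return items[index * chunk_size:(index + 1) * chunk_size]

# With completions=4 (four indexed Pods) and 8 work items, the Pod with
# index 0 processes items 0-1, index 1 processes items 2-3, and so on.
```

The Job manifest would set completionMode: Indexed and a completions count matching total_pods; the chunking logic itself stays entirely in the worker.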
On policy and security
A surprisingly good number of sessions covered interesting topics related to policy and security. It was nice
to learn two realities:
u32 type, which can only store integers between zero and about four billion (2^32 - 1, to be precise).
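For the record, the "about four billion" figure works out as follows:

```python
# Maximum value of an unsigned 32-bit integer (Rust's u32):
U32_MAX = 2**32 - 1
print(U32_MAX)  # 4294967295, i.e. "about four billion"
```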
There's also lots of little things that need to be just right, like translating the different memory management approaches of the languages, and dealing with a myriad of fiddly little issues like passing arguments and return values in and out of method calls, helpers for defining classes and methods (and pointing to the correct Rust functions), and so on.
All in all, these libraries are fairly significant pieces of work, and I'm mighty glad that someone else has taken on the job of building (and maintaining!) them.
As an example, here is a Rutie method (rbself being the self reference to the current object):
fn enquo_field_decrypt_text(ciphertext_obj: RString, context_obj: RString) -> RString {
    // (rbself is supplied by the surrounding Rutie methods! macro)
    let ciphertext = ciphertext_obj.to_str_unchecked();
    let context = context_obj.to_vec_u8_unchecked();
    let field = rbself.get_data(&*FIELD_WRAPPER);
    // etc etc etc
}
The equivalent in Magnus is just the function signature:
fn decrypt_text(&self, ciphertext: String, context: String) -> Result<String, magnus::Error>
You can also see there that Magnus signals an exception via the Result return value, while Rutie's approach to raising an exception involves poking the Ruby VM directly, which always struck me as a bit ugly.
There are several other minor things in Magnus (like its cleaner approach to wrapping structs so they can be stored in Ruby objects) that I'm appreciating, too.
Never discount the power of ergonomics for making a happy developer.
$ git diff --stat -- lib ext/enquo/src
ruby/ext/enquo/src/field.rs 342 ++++++++++++++++++++++++++++++++++++++
ruby/ext/enquo/src/lib.rs 338 ++++---------------------------------
ruby/ext/enquo/src/root.rs 39 +++++
ruby/ext/enquo/src/root_key.rs 67 ++++++++
ruby/lib/enquo.rb 6 +-
ruby/lib/enquo/field.rb 173 -------------------
ruby/lib/enquo/root.rb 28 ----
ruby/lib/enquo/root_key.rb 1 -
ruby/lib/enquo/root_key/static.rb 27 ---
9 files changed, 479 insertions(+), 542 deletions(-)
Considering that I was translating from a higher level language into a lower level one, the removal of so much code is quite remarkable.
Magnus was able to automagically replace rather a lot of raise ArgumentError if something.isnt_right code in those .rb files.
So, in conclusion: if you, too, are building Ruby extensions in Rust, while Rutie is a solid choice (and you probably should stick with it if you're already using it), I highly recommend giving Magnus a look for your next extension.
% mount -t ext4 -l
/dev/mapper/foobar-root_1 on / type ext4 (rw,relatime,errors=remount-ro)
% sudo pvs
  PV                    VG     Fmt  Attr PSize   PFree
  /dev/mapper/md1_crypt foobar lvm2 a--  445.95g 430.12g
% sudo vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  foobar   1   2   0 wz--n- 445.95g 430.12g
% sudo lvs
  LV     VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root_1 foobar -wi-ao---- <14.90g
% lsblk
NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
[...]
sdd                     8:48   0 447.1G  0 disk
  sdd1                  8:49   0   571M  0 part  /boot/efi
  sdd2                  8:50   0   488M  0 part
    md0                 9:0    0   487M  0 raid1 /boot
  sdd3                  8:51   0 446.1G  0 part
    md1                 9:1    0   446G  0 raid1
      md1_crypt       253:0    0   446G  0 crypt
        foobar-root_1 253:1    0  14.9G  0 lvm   /
[...]
sdf                     8:80   0 447.1G  0 disk
  sdf1                  8:81   0   571M  0 part
  sdf2                  8:82   0   488M  0 part
    md0                 9:0    0   487M  0 raid1 /boot
  sdf3                  8:83   0 446.1G  0 part
    md1                 9:1    0   446G  0 raid1
      md1_crypt       253:0    0   446G  0 crypt
        foobar-root_1 253:1    0  14.9G  0 lvm   /

The actual cryptsetup configuration is:
% cat /etc/crypttab
md1_crypt UUID=77246138-b666-4151-b01c-5a12db54b28b none luks,discard

Now, to automatically open the crypto device during boot we can instead use:
% cat /etc/crypttab
md1_crypt UUID=77246138-b666-4151-b01c-5a12db54b28b none luks,discard,keyscript=/etc/initramfs-tools/unlock.sh
# touch /etc/initramfs-tools/unlock.sh
# chmod 0700 /etc/initramfs-tools/unlock.sh
# $EDITOR /etc/initramfs-tools/unlock.sh
# cat /etc/initramfs-tools/unlock.sh
#!/bin/sh
echo -n "provide_the_actual_password_here"
# update-initramfs -k all -u
[...]

The server will then boot without prompting for a crypto password. Note that initramfs-tools by default uses an insecure umask of 0022, resulting in the initrd being accessible to everyone. But if you have the dropbear-initramfs package installed, its /usr/share/initramfs-tools/conf-hooks.d/dropbear sets UMASK=0077, so the resulting /boot/initrd* file should automatically have proper permissions (0600). The cryptsetup hook warns about a permissive umask configuration during update-initramfs runs, but if you want to be sure, explicitly set it via e.g.:
# cat > /etc/initramfs-tools/conf.d/umask << EOF
# restrictive umask to avoid non-root access to initrd:
UMASK=0077
EOF
# update-initramfs -k all -u

Disclaimer: Of course you need to trust users with access to /etc/initramfs-tools/unlock.sh as well as to the initramfs/initrd on your system. Furthermore you should wipe the boot partition (to destroy the keyfile information) before handing over such a disk. But that is a risk my customer can live with, YMMV.